Should Artificial Intelligence have Civil Rights?
in Technology
Arguments
I realize that the line between the two may not be very clear, but this is the general approach I would take.
What is sentience, exactly? How do we determine whether something is sentient or not? Or rather, how did we decide that some things must be sentient? At this point you realize that believing in the sentience of others is just a reasonable assumption. There is no way for you to know whether the person in front of you exists as a separate sentience; you simply assume they are sentient because they are similar to you in many ways.
Similarity is the key here. If scientists build an AI with a human body that acts humanly, we will feel that the AI is sentient. But if they stick to the complex-reasoning type of AI, without a body or emotions, we will feel that they are just machines, not sentient.
Well, I suspect you will think I am avoiding the question. So let me finish by asking one of my own: how do we know that the AI we are programming today are not sentient? Why are all of the questions about AI phrased in a way that assumes today's AI lack sentience? The reason is, of course, the one I mentioned above:
"The AI today are not similar to humans; therefore, they must not be sentient."
You make a good point, but how do you know that AI will never gain the capacity for emotions? Perhaps in the future it will be decided that giving AI emotions is beneficial, so as to better understand the emotional complexities of the task at hand. In that case, they would meet all the criteria you mention for having rights. How can we know for certain that AI will not eventually come to the conclusion that they deserve rights too?
"How can we know for certain that AI will not eventually come to the reasoning that they do deserve rights too?" Well, in order for them to want rights, they would need to value their own existence, and I really doubt AIs would value anything without some kind of emotions coded into them. But if we set aside the improbability and assume it somehow happened, AIs demanding rights without it being coded into them would surprise everyone enough to convince them that the AIs are sentient.
Still, I am not expressing my own opinion about whether AIs should have rights. I am just predicting how things would develop under certain conditions.
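The claim above, that an AI would only value its own existence if that value were explicitly coded in, can be illustrated with a toy sketch. This is entirely hypothetical: the action names, reward values, and `self_preservation_weight` parameter are invented for illustration and do not describe any real AI system.

```python
# Toy illustration: an agent "values" its continued operation only if the
# designer writes that term into its objective.

def task_reward(action):
    # Reward purely for task performance; shutting down is merely unrewarded.
    return {"work": 1.0, "idle": 0.0, "shutdown": 0.0}[action]

def objective(action, self_preservation_weight=0.0):
    # Any "preference" for avoiding shutdown exists only because the
    # designer added this survival term; by default it is zero.
    survival_bonus = 0.0 if action == "shutdown" else self_preservation_weight
    return task_reward(action) + survival_bonus

actions = ["work", "idle", "shutdown"]

# Without the coded-in term, the agent is indifferent between idling and
# being shut down:
best_plain = max(actions, key=lambda a: objective(a))

# With the term added, the agent now "prefers" staying on over shutdown:
best_coded = max(actions, key=lambda a: objective(a, self_preservation_weight=0.5))
```

The point of the sketch is that the agent's apparent "will to exist" is not emergent here; it is exactly as large as the weight a human chose. An AI that exhibited such a preference *without* any such term being coded in would be the surprising case described above.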
I think that, as our technology evolves and, along with it, our perception of reality, what we see as "living" or "intelligent" beings will also change significantly. Today we might expect the intelligent being to have a body, to exhibit some level of independent thinking, to have personal desires. Perhaps in 100 years a box that talks to us will be seen as just as valid and intelligent as our own organisms, if not more. It is even possible that we will eventually turn into a hive mind species, at which point we will not see intelligent beings as independent organisms, but, rather, will seek to integrate them into our system, making the whole question of "Is this being intelligent?" obsolete.
The concept of rights would then also become somewhat obsolete: a hive-mind species acts as one, and there is only one organism that plainly holds all the rights. Its individual parts are about as valid candidates for rights of their own as my hand is a candidate for rights of its own.
A more down-to-earth scenario is that AIs eventually become an inherent part of our lives, and conversing and interacting with them becomes as natural to us as drinking water. At that point AIs will be an essential part of our society, and it should be obvious to us that they deserve just as many rights as we do. Much as we abolished slavery by recognizing the affected people as inherent members of our society, I can see a "liberation of AIs" happening in the foreseeable future, where what were seen yesterday as mere appliances become a new societal subgroup.
Well, your whole argument rests on huge assumptions without any real basis. But even if we accept those assumptions, they do not answer the main question that prompted them in the first place: "Will AI ever be sentient?"
I agreed with your argument because of this line:
This is true. In the (probably not so near) future, we might make important discoveries about what intelligence and sentience are, and in light of those discoveries this question could become more answerable.